We demonstrate the use of a variational method to determine a quantitative lower bound on the rate of convergence of Markov Chain Monte Carlo (MCMC) algorithms as a function of the target density and proposal density. The bound relies on approximating the second largest eigenvalue in the spectrum of the MCMC operator using a variational principle, and the approach is applicable to problems with continuous state spaces. We apply the method to one-dimensional examples with Gaussian and quartic target densities, and we contrast the performance of the Random Walk Metropolis-Hastings (RWMH) algorithm with a ``smart'' variant that incorporates gradient information into the trial moves. We find that the variational method agrees quite closely with numerical simulations. We also see that the smart MCMC algorithm often fails to converge geometrically in the tails of the target density except in the simplest case we examine, and even then care must be taken to choose the appropriate scaling of the deterministic and random parts of the proposed moves. This calls into question the utility of smart MCMC in more complex problems. Finally, we apply the same method to approximate the rate of convergence in multidimensional Gaussian problems with and without importance sampling. There we demonstrate the necessity of importance sampling for target densities which depend on variables with a wide range of scales.
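The connection between the second largest eigenvalue of the MCMC operator and the geometric convergence rate can be illustrated numerically. The sketch below is not the paper's variational method: it simply discretizes a one-dimensional RWMH kernel for a standard Gaussian target on a finite grid and diagonalizes the resulting stochastic matrix, so that the second largest eigenvalue modulus, which governs the geometric rate, can be read off directly. The grid bounds, resolution, and proposal width are illustrative choices.

```python
import numpy as np

def rwmh_transition_matrix(grid, log_target, proposal_sigma):
    """Discretized Random Walk Metropolis-Hastings kernel on a 1D grid."""
    n = len(grid)
    h = grid[1] - grid[0]
    P = np.zeros((n, n))
    for i in range(n):
        # Gaussian proposal density from grid[i] to each grid point, times cell width
        q = h * np.exp(-0.5 * ((grid - grid[i]) / proposal_sigma) ** 2) \
            / (proposal_sigma * np.sqrt(2.0 * np.pi))
        # Metropolis acceptance probability min(1, pi(y)/pi(x))
        a = np.minimum(1.0, np.exp(log_target(grid) - log_target(grid[i])))
        P[i, :] = q * a
        P[i, i] += 1.0 - P[i, :].sum()  # rejected proposal mass stays put
    return P

grid = np.linspace(-6.0, 6.0, 400)
log_target = lambda x: -0.5 * x ** 2  # unnormalized standard Gaussian target
P = rwmh_transition_matrix(grid, log_target, proposal_sigma=1.0)

# The RWMH kernel is reversible with respect to the target, so its spectrum
# is real; eigenvalue 1 corresponds to the stationary density, and the
# second largest eigenvalue modulus sets the geometric convergence rate.
eigvals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
lambda2 = eigvals[1]
print(f"second largest |eigenvalue|: {lambda2:.4f}")
```

A variational lower bound on the rate is obtained by maximizing the Rayleigh quotient of the operator over trial functions orthogonal to the constants; the brute-force diagonalization above serves only as a check on such estimates in low dimension.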